Specializing Word Embeddings for Similarity or Relatedness
Authors
Abstract
We demonstrate the advantage of specializing semantic word embeddings for either similarity or relatedness. We compare two variants of retrofitting and a joint-learning approach, and find that all three yield specialized semantic spaces that capture human intuitions regarding similarity and relatedness better than unspecialized spaces. We also show that using specialized spaces leads to clear improvements in NLP tasks and applications, namely document classification and synonym selection, which rely on either similarity or relatedness but not both.
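The retrofitting approach mentioned in the abstract can be illustrated with a minimal sketch. This is the generic retrofitting update (pulling each vector toward its lexicon neighbours while staying anchored to its original distributional position), not the paper's specific variants; the `alpha` and `beta` weights and the toy lexicon are illustrative assumptions.

```python
import numpy as np

def retrofit(embeddings, lexicon, alpha=1.0, beta=1.0, iterations=10):
    """Sketch of retrofitting: each word vector is iteratively moved toward
    the (current) vectors of its lexicon neighbours, balanced against its
    original embedding by the weights alpha (original) and beta (neighbours)."""
    new = {w: v.copy() for w, v in embeddings.items()}
    for _ in range(iterations):
        for word, neighbours in lexicon.items():
            if word not in new:
                continue
            nbrs = [n for n in neighbours if n in new]
            if not nbrs:
                continue
            # weighted average of the original vector and neighbour vectors
            num = alpha * embeddings[word] + beta * sum(new[n] for n in nbrs)
            new[word] = num / (alpha + beta * len(nbrs))
    return new
```

Specialization then follows from the choice of lexicon: a synonym lexicon (e.g. WordNet synsets) pulls the space toward similarity, while an association lexicon pulls it toward relatedness.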
Similar resources
Injecting Word Embeddings with Another Language's Resource : An Application of Bilingual Embeddings
Word embeddings learned from a text corpus can be improved by injecting knowledge from external resources, while at the same time specializing them for similarity or relatedness. These knowledge resources (e.g., WordNet, the Paraphrase Database) may not exist for all languages. In this work we introduce a method to inject word embeddings of a language with the knowledge resource of another language b...
Enhanced Word Representations for Bridging Anaphora Resolution
Most current models of word representations (e.g., GloVe) have successfully captured fine-grained semantics. However, the semantic similarity exhibited in these word embeddings is not suitable for resolving bridging anaphora, which requires knowledge of associative similarity (i.e., relatedness) rather than semantic similarity information between synonyms or hypernyms. We create word embeddings (...
Unsupervised Low-Dimensional Vector Representations for Words, Phrases and Text that are Transparent, Scalable, and produce Similarity Metrics that are Complementary to Neural Embeddings
Neural embeddings are a popular set of methods for representing words, phrases or text as a low-dimensional vector (typically 50-500 dimensions). However, it is difficult to interpret these dimensions in a meaningful manner, and creating neural embeddings requires extensive training and tuning of multiple parameters and hyperparameters. We present here a simple unsupervised method for represent...
Measuring Semantic Similarity and Relatedness with Distributional and Knowledge-based Approaches
This paper provides a survey of different techniques for measuring the semantic similarity and relatedness of word pairs. It covers both knowledge-based approaches exploiting taxonomies like WordNet, and corpus-based approaches that rely on distributional statistics. We introduce these techniques, evaluate their performance, and discuss their merits and shortcomings. A speci...
Improve Lexicon-based Word Embeddings By Word Sense Disambiguation
Several works learn a lexicon together with the corpus to improve word embeddings. However, they either model the lexicon separately but update the neural networks for both the corpus and the lexicon with the same likelihood, or minimize the distance between all of the synonym pairs in the lexicon. Such methods do not consider the relatedness and difference of the corpus and...